
    Ambiguity, Weakness, and Regularity in Probabilistic Büchi Automata

    Probabilistic Büchi automata (PBA) are a natural generalization of PFA to infinite words, but they have been studied in depth only rather recently, and many interesting questions are still open. PBA are known to accept, in general, a class of languages that goes beyond the regular languages. In this work we extend the known classes of restricted PBA that still accept only regular languages, relying strongly on notions concerning ambiguity in classical omega-automata. Furthermore, we investigate the expressivity of the natural but so far unconsidered class of weak PBA, and we show that the regularity problem for weak PBA is undecidable.

    Flat Model Checking for Counting LTL Using Quantifier-Free Presburger Arithmetic

    This paper presents an approximation approach to verifying counter systems with respect to properties formulated in an expressive counting extension of linear temporal logic. It can express, e.g., that the number of acknowledgements never exceeds the number of requests to a service, by counting specific positions along a run and imposing arithmetic constraints. The addressed problem is undecidable and is therefore solved on flat under-approximations of a system. This provides a flexibly adjustable trade-off between exhaustiveness and computational effort, similar to bounded model checking. Recent techniques and results for model-checking frequency properties over flat Kripke structures are lifted and employed to construct a parametrised encoding of the (approximated) problem in quantifier-free Presburger arithmetic. A prototype implementation based on the Z3 SMT solver demonstrates the effectiveness of the approach on problems from the RERS Challenge.
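    To give a concrete feel for this kind of encoding, the sketch below expresses the acknowledgements-vs-requests property over a bounded run in quantifier-free linear integer arithmetic and checks it with Z3's Python bindings. The bound, the toy system constraint, and all variable names are assumptions made for illustration; this is not the paper's parametrised encoding.

```python
# A minimal sketch (illustrative, not the paper's encoding): the counting
# property "the number of acknowledgements never exceeds the number of
# requests" over a bounded run, stated in quantifier-free linear integer
# arithmetic and checked with the Z3 SMT solver's Python bindings.
from z3 import And, Bool, BoolVal, If, Implies, Not, Or, Solver, Sum, sat

K = 6  # hypothetical bound on the explored (flat) run prefix

req = [Bool(f"req_{i}") for i in range(K)]  # request emitted at step i
ack = [Bool(f"ack_{i}") for i in range(K)]  # acknowledgement emitted at step i

def count(flags, upto):
    # number of steps <= upto at which the flag holds, as an integer term
    return Sum([If(f, 1, 0) for f in flags[: upto + 1]])

# Toy stand-in for the system: an acknowledgement at step i requires a
# request at some earlier step.
system = And([Implies(ack[i], Or([BoolVal(False)] + req[:i])) for i in range(K)])

# Counting property: on every prefix of the run, #acks <= #requests.
prop = And([count(ack, i) <= count(req, i) for i in range(K)])

s = Solver()
s.add(system, Not(prop))  # look for a bounded counterexample
print("violated within bound" if s.check() == sat else "holds up to the bound")
```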

    Determinization of Büchi Automata: Unifying the Approaches of Safra and Muller-Schupp

    Determinization of Büchi automata is a long-known difficult problem, and after the seminal result of Safra, who developed the first asymptotically optimal construction from Büchi into Rabin automata, much work went into improving, simplifying, or avoiding Safra's construction. A different, less known determinization construction was derived by Muller and Schupp and appears, at first sight, to be unrelated to Safra's construction. In this paper we propose a new meta-construction from nondeterministic Büchi to deterministic parity automata which strictly subsumes both the construction of Safra and the construction of Muller and Schupp. It is based on a correspondence between structures that are encoded in the macrostates of the determinization procedures: Safra trees on one hand, and levels of the split-tree, which underlies the Muller and Schupp construction, on the other. Our construction allows for combining the mentioned constructions and opens up new directions for the development of heuristics. (Comment: Full version of ICALP 2019 paper.)
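    As a rough illustration of the state-set bookkeeping that such determinization procedures refine, the sketch below represents a nondeterministic Büchi automaton as a transition table and performs one subset-construction step that separates accepting from non-accepting successors, the elementary split that Safra trees and Muller-Schupp split-trees elaborate on. The automaton and all names are assumptions; this is not the paper's construction.

```python
# A minimal, illustrative sketch: a nondeterministic Büchi automaton as a
# transition table, plus the basic "split" step underlying tree-based
# determinization constructions (separating accepting from non-accepting
# successor states). The example automaton is hypothetical.
class NBA:
    def __init__(self, delta, accepting, initial):
        self.delta = delta            # dict: (state, letter) -> set of successor states
        self.accepting = accepting    # set of accepting states
        self.initial = initial        # initial state

    def split_step(self, macrostate, letter):
        """One subset-construction step, returning the successors that are
        accepting states and those that are not, as two separate sets."""
        acc_part, rest = set(), set()
        for q in macrostate:
            for p in self.delta.get((q, letter), set()):
                (acc_part if p in self.accepting else rest).add(p)
        return acc_part, rest

delta = {
    ("q0", "a"): {"q0", "q1"},
    ("q1", "a"): {"q1"},
    ("q1", "b"): {"q0"},
}
A = NBA(delta, accepting={"q1"}, initial="q0")
print(A.split_step({"q0"}, "a"))  # -> ({'q1'}, {'q0'})
```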

    A basic Helmholtz Kernel Information Profile for machine-actionable FAIR Digital Objects

    To reach the declared goal of the Helmholtz Metadata Collaboration (HMC) Platform, making the depth and breadth of research data produced by Helmholtz Centres findable, accessible, interoperable, and reusable (FAIR) for the whole science community, the concept of FAIR Digital Objects (FAIR DOs) has been chosen as the top-level commonality across all research fields and their existing and future infrastructures. Over the last years, not only within the Helmholtz Metadata Collaboration Platform but also on an international level, the road towards realizing FAIR DOs has been paved more and more by concretizing concepts and implementing the base services required for FAIR DOs, e.g., different instances of Data Type Registries for accessing, creating, and managing the Data Types required by FAIR DOs, as well as technical components to support the creation and management of FAIR DOs: the Typed PID Maker, providing machine-actionable interfaces for creating, validating, and managing PIDs with machine-actionable metadata stored in their PID record, and the FAIR DO testbed, currently evolving into the FAIR DO Lab, serving as a reference implementation for setting up a FAIR DO ecosystem. However, introducing FAIR DOs is not only about providing technical services; it also requires the definition of, and agreement on, interfaces, policies, and processes. A first step in this direction was made in the context of HMC by agreeing on a Helmholtz Kernel Information Profile. In the concept of FAIR DOs, PID Kernel Information is key to the machine actionability of digital content. Strongly relying on Data Types and stored in the PID record directly at the PID resolution service, PID Kernel Information can be used by machines for fast decision making. In this session, we will briefly present the Helmholtz Kernel Information Profile and a first demonstrator allowing the semi-automatic creation of FAIR DOs for arbitrary DOIs accessible via the well-known Zenodo repository.
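    As a rough illustration of what PID Kernel Information stored in a PID record can look like to a machine, the sketch below builds a minimal record with a few kernel attributes and a trivial machine-side decision based on them. The attribute names, identifiers, and values are illustrative assumptions, not the actual Helmholtz Kernel Information Profile.

```python
# A minimal, illustrative PID record with kernel-information-style attributes
# that a machine could read directly from the PID resolution service, without
# fetching the digital object itself. All identifiers and attribute names are
# hypothetical, not taken from the Helmholtz profile.
from datetime import date, timedelta

pid_record = {
    "pid": "21.T11148/0000-example",                       # hypothetical PID
    "kernelInformationProfile": "21.T11148/profile-example",
    "digitalObjectType": "21.T11148/dataset-example",
    "digitalObjectLocation": "https://zenodo.org/record/0000000",
    "dateCreated": "2023-05-10",
    "license": "https://creativecommons.org/licenses/by/4.0/",
}

def worth_harvesting(record, max_age_days=3650, required_key="license"):
    """Toy fast decision a harvester might make from kernel information alone."""
    age = date.today() - date.fromisoformat(record["dateCreated"])
    return required_key in record and age <= timedelta(days=max_age_days)

print(worth_harvesting(pid_record))
```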

    A common PID Kernel Information Profile for the German Helmholtz Association of Research Centres

    In the concept of FAIR Digital Objects, PID Kernel Information is key to the machine actionability of digital content. Because it strongly relies on Data Types and is stored in a PID record directly at the PID resolution service, PID Kernel Information can be used by machines for fast decision making. As a first step towards standardizing PID Kernel Information, the RDA Working Group on PID Kernel Information has defined a first proposal of a core Kernel Information Profile (KIP), together with a list of seven guiding principles that help decide which information should be part of a KIP and which information should be stored elsewhere. The Helmholtz Metadata Collaboration (HMC) Platform is a joint endeavor across all research areas of the Helmholtz Association, the largest association of large-scale research centers in Germany. The goal of HMC is to make the depth and breadth of research data produced by Helmholtz Centres findable, accessible, interoperable, and reusable (FAIR) for the whole science community. To reach this goal, the concept of FAIR Digital Objects has been chosen as the top-level commonality across all research fields and their existing and future infrastructures. To fulfill this role, a common Helmholtz KIP has been agreed on, serving as the basis for all FAIR Digital Objects created in the context of HMC. This poster describes the Helmholtz KIP and elaborates on the decisions leading to differences compared to the core KIP recommended by the RDA. While remaining mostly compatible with the RDA core KIP, the Helmholtz KIP adds some additional properties that address the multidisciplinary environment it is made for. Thus, it serves as a good starting point for rolling out the FAIR Digital Object concept over all Research Data Management Infrastructures of the Helmholtz Association and beyond. In addition, the poster provides a first impression of a demonstrator that is currently under development and should serve as a showcase. In a first step, it will allow arbitrary datasets from Zenodo to be transformed into FAIR Digital Objects using our Helmholtz KIP. In a next step, we plan to also include datasets from infrastructures hosted at Helmholtz Centres, creating a large and unprecedented network of FAIR Digital Objects that provides scientists with a rich pool of linked and searchable research data. This work has been supported by the research program ‘Engineering Digital Futures’ of the Helmholtz Association of German Research Centers and the Helmholtz Metadata Collaboration Platform.

    Guidance on Versioning of Digital Assets

    Versioning of data and metadata is a crucial, but often overlooked, topic in scientific work. Using the wrong version of a (meta)data set can lead to drastically different outcomes in interpretation and to substantial, propagating downstream errors. At the same time, past versions of (meta)data sets are valuable records of the research process which should be preserved for transparency and complete reproducibility. Further, the final version of a (meta)data set may actually include errors that previous versions did not. Thus, careful version control is the foundation for trust in, and broad reusability of, research and operational (meta)data. This document provides an introduction to the principles of versioning, gives technical recommendations on how to manage version histories, and discusses some pitfalls and possible solutions. In the first part of the document, we present examples of change processes that require proper management and introduce popular versioning schemes; the document then presents recommended practices for researchers as well as for infrastructure developers.
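    As a concrete illustration of one popular versioning scheme that such guidance typically introduces, semantic versioning (MAJOR.MINOR.PATCH), the helper below bumps a version string at a chosen level. It is a sketch for illustration and is not taken from the document.

```python
# A minimal sketch of semantic versioning (MAJOR.MINOR.PATCH): bumping one
# component resets the lower-order components. Illustrative only.
import re

def bump(version: str, level: str) -> str:
    """Return the next semantic version at the given level."""
    m = re.fullmatch(r"(\d+)\.(\d+)\.(\d+)", version)
    if not m:
        raise ValueError(f"not a MAJOR.MINOR.PATCH version: {version}")
    major, minor, patch = map(int, m.groups())
    if level == "major":
        return f"{major + 1}.0.0"
    if level == "minor":
        return f"{major}.{minor + 1}.0"
    if level == "patch":
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown bump level: {level}")

print(bump("1.4.2", "minor"))  # -> 1.5.0
```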

    An interpretation of the FAIR principles to guide implementations in the HMC digital ecosystem

    The findable, accessible, interoperable, and reusable (FAIR) principles set out best practice for managing the dissemination, and ensuring the longevity, of digital resources. The Helmholtz Metadata Collaboration (HMC) provides guidance on metadata and related topics to those working in the Helmholtz ecosystem. Given the complexity both of the FAIR principles and of the Helmholtz ecosystem, we interpret the principles so that they are directly applicable to the Helmholtz context. In this interpretation we consider managers, tool developers, data managers, and researchers, amongst others, and provide guidance to these disparate roles on applying the FAIR principles in their professional lives.

    Determinization and ambiguity of classical and probabilistic Büchi automata

    Büchi automata can be seen as a straightforward generalization of ordinary NFA, adapted to handle infinite words. While they were originally introduced for applications in the decidability of logics, they have become a popular tool in practical applications, e.g. in automata-based approaches to model checking and synthesis problems. Being arguably both the simplest and the most well-known variant in the zoo of so-called omega-automata considered in this setting, they serve as an intermediate representation of omega-regular specifications of verification or synthesis requirements that are usually expressed in a more declarative fashion, e.g. using linear temporal logic. Unfortunately, nondeterministic automata are not directly suitable for certain applications, whereas deterministic Büchi automata are less expressive. This problem is usually solved either by constructing deterministic automata of a different kind, or by restricting their ambiguity, i.e., the maximal number of accepting runs on some word. In both cases, the transformation is expensive, yielding an exponential blow-up of the state space in the worst case. Therefore, optimized constructions and heuristics for common special cases are useful and in demand for actual practical applications. In this thesis, new results concerning both approaches are presented. On the one hand, we present a new general construction for determinization from nondeterministic Büchi to deterministic parity automata that unifies the dominant branches of previous approaches, based on the Safra construction and the Muller-Schupp construction. Additionally, we provide a set of new heuristics, some of which exploit properties of our unified construction. Furthermore, we characterize the ambiguity of Büchi automata by a hierarchy that is determined by simple syntactical patterns in the automata, and present a new construction that reduces the ambiguity of an automaton. Apart from the classical nondeterministic and deterministic variants of automata, it is natural to consider probabilistic automata, i.e., automata that, instead of utilizing nondeterminism, use a probability distribution on the states to decide which state to go to next. It is known that, in general, such automata are more expressive than classical automata. We show that subclasses of probabilistic automata that correspond to certain classes of the previously mentioned ambiguity hierarchy are not more expressive than classical automata, providing constructions to obtain classical Büchi automata from them.
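    To make the probabilistic transition mode mentioned above concrete, the sketch below represents a probabilistic Büchi automaton by transition distributions and samples one finite run prefix. The example automaton and all names are assumptions; the actual acceptance semantics of PBA (based on the probability of visiting accepting states infinitely often) is not modelled here.

```python
# A minimal, illustrative probabilistic Büchi automaton: instead of a
# nondeterministic choice, each (state, letter) pair carries a probability
# distribution over successor states. Only finite run prefixes are sampled;
# the acceptance condition on infinite runs is not modelled.
import random

class PBA:
    def __init__(self, dist, accepting, initial):
        self.dist = dist              # (state, letter) -> {successor: probability}
        self.accepting = accepting    # set of accepting states
        self.initial = initial        # initial state

    def sample_prefix(self, word, rng=random):
        """Sample one run prefix on a finite prefix of an input word."""
        state, run = self.initial, [self.initial]
        for letter in word:
            successors = self.dist[(state, letter)]
            states, probs = zip(*successors.items())
            state = rng.choices(states, weights=probs)[0]
            run.append(state)
        return run

dist = {
    ("q0", "a"): {"q0": 0.5, "q1": 0.5},
    ("q1", "a"): {"q1": 1.0},
}
P = PBA(dist, accepting={"q1"}, initial="q0")
print(P.sample_prefix("aaaa"))  # e.g. ['q0', 'q0', 'q1', 'q1', 'q1']
```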

    Automating Metadata Handling in Research Software Engineering

    Modern research is heavily dependent on software. The landscape of research software engineering is evolving at a high pace, and the effective handling of metadata plays a pivotal role in ensuring software discoverability, reproducibility, and general project quality. Properly curating metadata can, however, be a time-consuming task, and manual curation is error-prone. This poster introduces two new tools for streamlining metadata management: somesy and fair-python-cookiecutter. Somesy (software metadata synchronization) provides a user-friendly command-line interface that assists in the synchronization of software project metadata. Somesy supports best-practice metadata standards such as CITATION.cff and CodeMeta and automatically maintains metadata, such as essential project information (names, versions, authors, licenses), consistently across multiple files. This ensures metadata integrity and frees additional time for developers and maintainers to focus on their work. The fair-python-cookiecutter is a GitHub repository template which provides a structured foundation for Python projects. The template provides researchers and RSEs with support in meeting the increasing demands for software metadata during the development of Python tools and libraries. By cloning and applying the template to their projects, developers can benefit from the incorporated best practices, recommendations for software development, and software project metadata to ensure quality and facilitate citation of their work. The fair-python-cookiecutter is aligned with and inspired by standards such as the DLR Software Engineering Guidelines, OpenSSF Best Practices, REUSE, CITATION.cff, and CodeMeta. Furthermore, it uses somesy to enhance software metadata FAIRness. The template comes with detailed documentation and thus offers an accessible framework for achieving software quality and discoverability within academia.
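    To illustrate the kind of synchronization described above (a conceptual sketch under assumed file layouts, not somesy's actual implementation or command-line interface), the snippet below mirrors a few fields from a pyproject.toml into a minimal CITATION.cff; the real tools cover the full CITATION.cff and CodeMeta schemas and many more files.

```python
# A conceptual sketch of metadata synchronization: copy a few project fields
# from pyproject.toml into a minimal CITATION.cff so both stay consistent.
# Not somesy's implementation; requires Python 3.11+ for tomllib.
import tomllib
from pathlib import Path

def sync_citation_cff(pyproject_path="pyproject.toml", cff_path="CITATION.cff"):
    with open(pyproject_path, "rb") as f:
        project = tomllib.load(f)["project"]

    lines = [
        "cff-version: 1.2.0",
        "message: If you use this software, please cite it using these metadata.",
        f"title: {project['name']}",
        f"version: {project.get('version', '0.0.0')}",
        "authors:",
    ]
    for author in project.get("authors", []):
        lines.append(f"  - name: {author.get('name', 'unknown')}")
    Path(cff_path).write_text("\n".join(lines) + "\n")

if __name__ == "__main__":
    sync_citation_cff()
```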